[Model] Allow loading from original Mistral format #8168
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can, for example, comment /ready on the PR. 🚀
I think this is reasonable overall, but are these consolidated checkpoints going to be actively used going forward?
We tend to want to conform to HF as a standard, so if these checkpoints don't load in transformers now, then I think the community will continue to have interest in having those be converted to canonical "HF-style" checkpoints.
cache_dir: Optional[str],
index_file: str,
super nit: I think it makes more sense to have cache_dir after index_file, similar to how hf_hub_download is called.
happy to change
@@ -104,6 +110,56 @@ def get_config(
    return config


def load_params_config(model, revision) -> PretrainedConfig:
Is there a reason why you have this new config? It seems to have the same information you would have in the config.json, just named differently.
The main reason is that the original format is always stored in params.json, which accompanies the consolidated.safetensors checkpoints.

I guess there are two problems with config.json:

- They are tied quite heavily to transformers. So if one wants to add a new, non-transformers-formatted model, it doesn't make too much sense to follow the transformers config style.
- They tend to be quite bloated, e.g. they have info that's outdated, such as sliding_window here: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3/blob/e0bc86c23ce5aae1db576c8cca6f06f1f73af2db/config.json#L19
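For context, a minimal sketch of what a params.json-based config loader could look like. This is an illustration, not the PR's implementation: the key names follow the layout of the published Mistral params.json files, and the mapping shown is an assumption.

# Illustrative sketch only: map a Mistral params.json onto a transformers
# PretrainedConfig. Key names are assumptions based on public params.json files.
import json
from transformers import PretrainedConfig

def load_params_config_sketch(params_path: str) -> PretrainedConfig:
    with open(params_path) as f:
        params = json.load(f)
    # Translate the original Mistral names to HF-style config attributes.
    return PretrainedConfig(
        hidden_size=params["dim"],
        num_hidden_layers=params["n_layers"],
        num_attention_heads=params["n_heads"],
        num_key_value_heads=params.get("n_kv_heads", params["n_heads"]),
        rms_norm_eps=params["norm_eps"],
        vocab_size=params["vocab_size"],
    )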
vllm/model_executor/models/llama.py
Outdated
# Mistral/Llama models can also be loaded with --load-format consolidated
# from consolidated.safetensors checkpoints
consolidated_mapping = {
    "layers": "model.layers",
    "attention": "self_attn",
    "wq": "q_proj",
    "wk": "k_proj",
    "wv": "v_proj",
    "wo": "o_proj",
    "attention_norm": "input_layernorm",
    "feed_forward": "mlp",
    "w1": "gate_proj",
    "w2": "down_proj",
    "w3": "up_proj",
    "ffn_norm": "post_attention_layernorm",
    "tok_embeddings": "model.embed_tokens",
    "output": "lm_head",
    "norm": "model.norm"
}
Is this a standard naming scheme used by Llama models as well? I have only seen Mistral models with this style of checkpoint.
Yeah, the original Llama checkpoints have this naming as well: https://github.com/meta-llama/llama/blob/8fac8befd776bc03242fe7bc2236cdb41b6c609c/llama/model.py#L207
(guess most people use the HF format indeed though)
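To show how a mapping like this would be applied during weight loading, here is a rough sketch. It is an illustration, not the PR's exact code; remap_weight_name is a hypothetical helper.

# Hypothetical helper: rename an original-format weight to the HF-style name
# vLLM's Llama implementation expects, using the mapping above.
def remap_weight_name(name: str, mapping: dict) -> str:
    parts = name.split(".")
    return ".".join(mapping.get(part, part) for part in parts)

# e.g. remap_weight_name("layers.0.attention.wq.weight", consolidated_mapping)
# returns "model.layers.0.self_attn.q_proj.weight"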
Hey @mgoin, yes, at Mistral we usually only release the consolidated checkpoints to begin with (see the table here). The community then usually converts the models to transformers, but this takes some time, and then it takes even more time to wait for the next transformers release. So yes, I think the consolidated format will surely continue to be used!

I understand that it's easier to just conform to HF-style, but I guess vLLM also has GGUF support, and it could make sense to start allowing more and more support for other formats? Guess in the future vLLM can run any nn.Module regardless of whether it's in transformers or not, no?

The biggest reason why I think it makes sense to also allow the consolidated format is so that users could have day-0 inference support for Mistral models for future model releases. vLLM is essentially implementing all the model architectures natively in the model registry, so it's just about "correctly" loading the file format, no? It's a bit painful to have to adapt the mistral format to the transformers format to allow inference in vLLM. With mistral-common in vLLM and with consolidated.safetensors format loading, models like Mistral-Nemo should be supported out of the box on day 0.

Happy to adapt the PR in a way that fits the design better!
vllm/config.py
Outdated
@@ -744,6 +747,7 @@ class LoadFormat(str, enum.Enum):
    SHARDED_STATE = "sharded_state"
    GGUF = "gguf"
    BITSANDBYTES = "bitsandbytes"
    CONSOLIDATED = "consolidated"
CONSOLIDATED is a bit broad as a name, no? Especially since there is SHARDED_STATE above, I think it could lead to confusion.
Thoughts on LoadFormat.MISTRAL? This makes it clear that the intent is to have us support / maintain the integration of our models into vLLM.
Yes makes sense!
+1 on calling it "MISTRAL" format
I agree, it's better to be explicit if possible
Thanks for the feedback @timlacroix @mgoin, I think everything should be incorporated. Let me know if there is anything else that should be changed!
I appreciate your explanation and justification @patrickvonplaten! I think this is in a good place; my last gripe is with load_params_config, since it would be good to future-proof this for other load formats in the future that may require similar changes. Look in the comments for my proposal.
tests/models/test_mistral.py
Outdated
# test that both HF format and mistral format work
load_format = "mistral" if model.endswith("v0.3") else "auto"

with vllm_runner(model,
                 dtype=dtype,
                 tokenizer_mode="mistral",
                 load_format=load_format) as vllm_model:
    vllm_outputs = vllm_model.generate_greedy_logprobs(
        example_prompts, max_tokens, num_logprobs)
I think it would be best if you specifically made a test for "mistralai/Mistral-7B-Instruct-v0.3" where you test the model load with both "safetensors" and "mistral" (and HF to just ensure reference) since both checkpoint formats are present on that model card.
That makes sense! Thanks - will take care of it in a bit
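A sketch of what such a test could look like, hedged: the fixture names and parameters mirror the snippet above, but this is not the PR's final test.

import pytest

# Illustrative only: exercise both checkpoint formats on a model card that
# ships both consolidated and HF-style weights.
@pytest.mark.parametrize("model", ["mistralai/Mistral-7B-Instruct-v0.3"])
@pytest.mark.parametrize("load_format", ["safetensors", "mistral"])
def test_both_formats(vllm_runner, example_prompts, model, load_format):
    with vllm_runner(model,
                     dtype="bfloat16",
                     tokenizer_mode="mistral",
                     load_format=load_format) as vllm_model:
        vllm_model.generate_greedy_logprobs(example_prompts,
                                            max_tokens=64,
                                            num_logprobs=5)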
vllm/config.py
Outdated
load_params_config: Load the config from mistral format
    (params.json) instead of config.json.
I think you made some good points on supporting checkpoint formats other than HF. Given that assumption, I think it would be best to future-proof the interface of config overrides.
I would like to propose changing this parameter to something like config_format that takes a value from a ConfigFormat enum, defaulting to ConfigFormat.HF_CONFIG or AUTO. Then get_config() can dispatch based on this value to the specified config parsing.
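A minimal sketch of that proposal; the enum members and the get_config signature here are assumptions, not the merged code.

import enum
from typing import Optional
from transformers import AutoConfig, PretrainedConfig

class ConfigFormat(str, enum.Enum):
    AUTO = "auto"
    HF = "hf"
    MISTRAL = "mistral"

def get_config(model: str,
               revision: Optional[str] = None,
               config_format: ConfigFormat = ConfigFormat.AUTO
               ) -> PretrainedConfig:
    # Dispatch to the parser that matches the checkpoint's config format.
    if config_format in (ConfigFormat.AUTO, ConfigFormat.HF):
        return AutoConfig.from_pretrained(model, revision=revision)
    if config_format == ConfigFormat.MISTRAL:
        return load_params_config(model, revision)  # parses params.json
    raise ValueError(f"Unsupported config format: {config_format}")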
Makes a lot of sense!
Thanks for working on getting the CI green, looks like you're close. It does seem like AUTO is causing trouble here, so I would be okay with removing it and defaulting to HF if you can't get it stable. Otherwise this LGTM so we can land when green
Yeah, I think it should be fine now (CI is all green). GGUF tests were failing because they didn't default to HF. Fixed it, and everything should work fine now, I believe! Thanks for guiding me through the PR!
Will this work for Pixtral as well?
Will this work for the Llama original formats, which, e.g., are in the formats of …
Mistral models are usually uploaded in two formats:

The original consolidated format:
- consolidated.safetensors
- params.json

A bit later, the HF format is usually also uploaded:
- model-00001-of-00003.safetensors
- config.json
This PR allows loading directly from the original consolidated format, which should make it easier to directly get new models working in vLLM.
The PR can be tested, for example, with:
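The original example snippets were not captured in this copy; below is a minimal sketch, assuming the offline LLM entrypoint and the v0.3 instruct model used in the tests above, not the PR's original commands.

# Hedged reconstruction, not the PR's original snippet: load the checkpoint
# and tokenizer directly in their original Mistral formats.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3",
          tokenizer_mode="mistral",  # use mistral-common tokenization
          load_format="mistral")     # consolidated.safetensors + params.json
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)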
BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Model] for adding a new model or improving an existing model. Model name should appear in the title.
- [Frontend] for changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
- [Kernel] for changes affecting CUDA kernels or other compute kernels.
- [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
- [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:

- Please use format.sh to format your code.
- Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

- An action-required label will be put on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!